International Journal of Information Technology and Computer Science (IJITCS)

ISSN: 2074-9007 (Print)

ISSN: 2074-9015 (Online)

DOI: https://doi.org/10.5815/ijitcs

Website: https://www.mecs-press.org/ijitcs

Published By: MECS Press

Frequency: 6 issues per year

Number(s) Available: 137


IJITCS is committed to bridging the theory and practice of information technology and computer science. From innovative ideas to specific algorithms and full system implementations, IJITCS publishes original, peer-reviewed, high-quality articles in the areas of information technology and computer science. IJITCS is a well-indexed scholarly journal and provides indispensable reading and references for people working at the cutting edge of information technology and computer science applications.

 

IJITCS has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.


IJITCS Vol. 17, No. 3, Jun. 2025

REGULAR PAPERS

Spectrogram-based Deep Learning Approach for Anomaly Detection from Cough Sounds

By Tugce Keles, Sengul Dogan, Abdul-Hafeez Baig, Turker Tuncer

DOI: https://doi.org/10.5815/ijitcs.2025.03.01, Pub. Date: 8 Jun. 2025

Artificial intelligence is now applied in many fields beyond computer science. In healthcare, it enables early disease detection and improves patient outcomes. This study develops a model that uses AI to find abnormal patterns in cough sounds. A cough is a key symptom of asthma and other respiratory diseases. Previous research has focused on raw audio signals of coughs. In contrast, we analyze spectrogram images derived from these sounds to improve accuracy. We designed a new convolutional neural network (CNN) for this purpose, termed TwoConvNeXt. To evaluate its classification performance, we used a cough sound dataset, on which TwoConvNeXt achieved a test classification accuracy of 99.66%. These results illustrate that the presented TwoConvNeXt architecture can be useful in both research and clinical settings, and the model can also be applied to other image classification problems. It may aid in the early diagnosis of respiratory conditions. Future work will expand the dataset and test the model on larger, more diverse samples.
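
As a concrete illustration of the spectrogram-based pipeline the abstract describes, the sketch below converts a cough recording into a log-spectrogram and feeds it to a small CNN. It is a generic Python example, not the authors' TwoConvNeXt architecture; the file name, input size, and layer sizes are assumptions.

```python
# Illustrative only: a generic spectrogram + small CNN pipeline,
# not the TwoConvNeXt model described in the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import torch
import torch.nn as nn

def cough_to_spectrogram(path="cough.wav", size=(64, 64)):
    """Load a cough recording and return a fixed-size log-spectrogram tensor."""
    sr, audio = wavfile.read(path)                # hypothetical file
    if audio.ndim > 1:                            # mix stereo down to mono
        audio = audio.mean(axis=1)
    _, _, spec = spectrogram(audio, fs=sr, nperseg=256)
    spec = np.log1p(spec)[:size[0], :size[1]]     # log-scale and crop
    spec = np.pad(spec, [(0, size[0] - spec.shape[0]),
                         (0, size[1] - spec.shape[1])])
    return torch.tensor(spec, dtype=torch.float32).unsqueeze(0).unsqueeze(0)

class TinyCoughCNN(nn.Module):
    """Two convolutional blocks followed by a binary (normal/abnormal) head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# x = cough_to_spectrogram("cough.wav")
# logits = TinyCoughCNN()(x)   # class 0 = normal, class 1 = abnormal (assumed labels)
```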

Traffic Sign Detection and Recognition Using Yolo Models

By Mareeswari V., Vijayan R., Shajith Nisthar, Rahul Bala Krishnan

DOI: https://doi.org/10.5815/ijitcs.2025.03.02, Pub. Date: 8 Jun. 2025

With the proliferation of advanced driver assistance systems and continued advances in autonomous vehicle technology, there is a need for accurate, real-time methods of identifying and interpreting traffic signs. The importance of traffic sign detection cannot be overstated, as it plays a pivotal role in improving road safety and traffic management. This proposed work suggests a unique real-time traffic sign detection and recognition approach using the YOLOv8 algorithm. Utilizing the integrated webcams of personal computers and laptops, we capture live traffic scenes and train our model using a meticulously curated dataset from Roboflow. Through extensive training, our YOLOv8 model achieves an excellent accuracy rate of 94%, compared to YOLOv7 at 90.1% and YOLOv5 at 81.3%, ensuring reliable detection and recognition across various environmental conditions. Additionally, this proposed work introduces an auditory alert feature that notifies the driver with a voice alert upon detecting traffic signs, enhancing driver awareness and safety. Through rigorous experimentation and evaluation, we validate the effectiveness of our approach, highlighting the importance of utilizing available hardware resources to deploy traffic sign detection systems with minimal infrastructure requirements. Our findings underscore the robustness of YOLOv8 in handling challenging traffic sign recognition tasks, paving the way for widespread adoption of intelligent transportation technologies and fostering the introduction of safer and more efficient road networks. In this paper, we compare YOLOv5, YOLOv7, and YOLOv8, and find that YOLOv8 outperforms its predecessors in traffic sign detection with an excellent overall mean average precision of 0.945. Notably, it demonstrates superior precision and recall, especially in essential sign classes such as "No overtaking" and "Stop," making it the preferred choice for accurate and dependable traffic sign detection tasks.
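
As a rough sketch of this kind of pipeline, the snippet below fine-tunes a YOLOv8 model with the ultralytics package, runs it on a webcam stream, and speaks an alert when a sign is detected. The dataset path, confidence threshold, and the use of pyttsx3 for the voice alert are assumptions rather than details taken from the paper.

```python
# Illustrative sketch of YOLOv8 traffic-sign detection with a spoken alert.
# Requires: pip install ultralytics pyttsx3
from ultralytics import YOLO
import pyttsx3

engine = pyttsx3.init()                      # offline text-to-speech for the alert

# Fine-tune a pretrained YOLOv8 nano model on a Roboflow-style dataset (hypothetical path).
model = YOLO("yolov8n.pt")
model.train(data="traffic_signs/data.yaml", epochs=50, imgsz=640)

# Stream from the built-in webcam (source=0) and announce detected signs.
for result in model.predict(source=0, stream=True, conf=0.5):
    for box in result.boxes:
        label = result.names[int(box.cls)]
        engine.say(f"{label} ahead")         # e.g. "Stop ahead"
        engine.runAndWait()
```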

Analyzing Price Dynamics, Activity of Players and Reviews of Popular Indie Games on Steam Post-COVID-19 Pandemic using SteamDB

By Araz R. Aliev, Tofig A. Aliyev, Rustam Eyniyev

DOI: https://doi.org/10.5815/ijitcs.2025.03.03, Pub. Date: 8 Jun. 2025

This study examines how the prices of popular indie games on Steam and the activity of their players changed after COVID-19. It uses data from SteamDB, which provides extensive information on game availability, sales, prices, player activity, followers, positive and negative reviews on Steam, and Twitch viewers. The goal is to analyze in depth how indie game makers and publishers set their prices, and how players reacted to games released before, during, and after COVID-19. The focus is on how developers changed their pricing models in response to major shifts in market demand and consumer behavior caused by the pandemic, and how players reacted to these price changes in the context of wage cuts and layoffs. Player reactions can be tracked not only through statistics on peak or average concurrent players, but also through the number of positive and negative reviews, because in difficult times it was important for players to allocate their available funds wisely and not to become disappointed in a game, nor to let other players become disappointed.
By studying these changes, the aim is to find out how the indie game industry responded to tough times and new opportunities in the digital entertainment world. Since the study is conducted in the post-COVID-19 period, it also aims to help developers choose the right strategy when pricing their new indie games or changing the prices of their existing indie games on Steam.
The main objects of research are indie games, because this genre is one of the most popular on SteamDB and such games require lower development costs, so their prices are affordable for the average player.
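
A minimal sketch of this kind of analysis, assuming a hypothetical CSV exported from SteamDB with one row per game and date; the column names and period boundaries are invented for illustration.

```python
# Illustrative sketch: compare indie game prices and reviews across pandemic periods.
# Assumes a hypothetical SteamDB export with the columns listed below.
import pandas as pd

df = pd.read_csv("steamdb_indie.csv", parse_dates=["date"])
# columns assumed: date, game, price_usd, peak_players, positive_reviews, negative_reviews

def covid_period(d):
    if d < pd.Timestamp("2020-03-01"):
        return "pre"
    if d <= pd.Timestamp("2021-12-31"):
        return "during"
    return "post"

df["period"] = df["date"].apply(covid_period)
df["review_ratio"] = df["positive_reviews"] / (df["positive_reviews"] + df["negative_reviews"])

# Average price, player activity, and review sentiment per period.
summary = df.groupby("period")[["price_usd", "peak_players", "review_ratio"]].mean()
print(summary)
```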

Cross-platform Fake Review Detection: A Comparative Analysis of Supervised and Deep Learning Models

By Faryad Bigdeli

DOI: https://doi.org/10.5815/ijitcs.2025.03.04, Pub. Date: 8 Jun. 2025

This project addresses the growing issue of fake reviews by developing models capable of detecting them across different platforms. By merging five distinct datasets, a comprehensive dataset was created, and various features were added to improve accuracy. The study compared traditional supervised models like Logistic Regression and SVM with deep learning models. Notably, simpler supervised models consistently outperformed deep learning approaches in identifying fake reviews. The findings highlight the importance of choosing the right model and feature engineering approach, with results showing that additional features don’t always improve model performance.
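
A minimal sketch of the comparison described, assuming a merged dataset with hypothetical "text" and "label" columns; the TF-IDF settings are illustrative choices, not the study's configuration.

```python
# Illustrative sketch: comparing simple supervised baselines on a merged review dataset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

df = pd.read_csv("merged_reviews.csv")       # hypothetical merged dataset (label: 1 = fake, 0 = genuine)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

for name, clf in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                  ("LinearSVM", LinearSVC())]:
    clf.fit(Xtr, y_train)
    print(name, "F1:", round(f1_score(y_test, clf.predict(Xte)), 3))
```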

Prioritization of Barriers to Digitization for Circular Systems using Analytical Hierarchy Process

By Mangesh P. Joshi

DOI: https://doi.org/10.5815/ijitcs.2025.03.05, Pub. Date: 19 Aug. 2025

The increasing urgency for sustainable practices has motivated this research to explore the barriers hindering the adoption of digital technologies in circular systems. As industries seek to leverage IoT for enhanced efficiency and sustainability, understanding these barriers is crucial for effective implementation. This study employs a comprehensive, multi–dimensional approach, integrating insights from a literature review and expert interviews with industry professionals. Key findings reveal that technological complexity and high initial costs are the most significant barriers, highlighting the need for targeted strategies to address these challenges. Additional barriers include regulatory compliance issues and unclear return on investment, which further complicate the adoption process. The study's conclusion emphasizes that overcoming these barriers is essential for facilitating the successful integration of digital technologies in circular economies. Furthermore, the research identifies the necessity for future investigations into the interactions between these barriers and the effectiveness of various interventions. The novelty of this study lies in its holistic examination of the multifaceted barriers, combining qualitative insights with a structured analytical framework. This approach not only contributes to the existing literature on Digitization but also offers practical implications for stakeholders aiming to enhance sustainability and efficiency in their operations. By addressing the identified challenges, organizations can pave the way for a more circular and resilient future, ultimately driving innovation and growth in the rapidly evolving digital landscape.
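
To make the Analytical Hierarchy Process step concrete, the sketch below derives priority weights for four illustrative barriers from a pairwise comparison matrix and checks the consistency ratio. The matrix values are invented for illustration and are not the study's data.

```python
# Illustrative AHP calculation: priority weights and consistency ratio
# for four hypothetical barriers (values are not from the study).
import numpy as np

barriers = ["Technological complexity", "High initial cost",
            "Regulatory compliance", "Unclear ROI"]

# Pairwise comparison matrix on Saaty's 1-9 scale (reciprocal by construction).
A = np.array([
    [1,   2,   4,   3],
    [1/2, 1,   3,   2],
    [1/4, 1/3, 1,   1/2],
    [1/3, 1/2, 2,   1],
], dtype=float)

# Priority vector = normalized principal eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
CR = CI / RI

for barrier, w in sorted(zip(barriers, weights), key=lambda t: -t[1]):
    print(f"{barrier}: {w:.3f}")
print(f"Consistency ratio: {CR:.3f} (acceptable if < 0.10)")
```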

Evaluating Energy-efficiency and Performance of Cloud-based Healthcare Systems Using Power-aware Algorithms: An Experimental Simulation Approach for Public Hospitals

By Aschalew Arega, Durga Prasad Sharma

DOI: https://doi.org/10.5815/ijitcs.2025.03.06, Pub. Date: 8 Jun. 2025

The use of cloud computing, particularly virtualized infrastructure, offers scalable resources, reduced hardware needs, and energy savings. In Ethiopian public hospitals, the lack of integrated healthcare systems and a national data repository, combined with existing systems deficiencies and inefficient traditional data centers, contribute to energy inefficiency, carbon emissions, and performance issues. Thus, evaluating the energy efficiency and performance of a cloud-based model with various workloads and algorithms is essential for its successful implementation in healthcare systems and digital health solutions. The study experimentally evaluates a cloud-based model's energy efficiency and performance for smart healthcare systems, employing descriptive and experimental designs to simulate cloud infrastructure. Simulations are conducted on diverse workloads in CloudSim using power-aware (PA) algorithms (along with VmAllocationPolicy and VmSelectionPolicy), and dynamic voltage frequency scaling (DVFS). Results reveal that the number of VMs and their migrations significantly impact energy consumption, with some algorithms achieving notable energy savings. Lr/Lrr-based algorithms are particularly energy-efficient, with LrMc and LrrMc saving 29.36% more energy than IqrMu at 55 VMs, and LrrRs saving 30.20% more at 1,765 VMs. DVFS adjusts energy consumption based on the number of VMs, while non-power-aware (NPA) consumes maximum energy based on hosts, regardless of the number of VMs. VM migrations, energy consumption, and average SLAV are positively correlated, while SLA is negatively correlated with these factors. In PlanetLab, energy consumption and average SLAV show a strong positive correlation (0.956) at Workload6, while SLA at Workload2 and average SLAV at Workload1 show a weak negative correlation (-0.055). Excessive migrations can disrupt the system's stability/performance and cause SLA violations. Task completion time is influenced by VM processing power and cloudlet length, being inversely proportional to VM processing power and directly proportional to cloudlet length. Overall, the findings suggest that cloud virtualization and energy-efficient algorithms can enhance healthcare systems performance, patient care, and operational sustainability.

Forecasting Agriculture Commodity Price Trend using Novel Competitive Ensemble Regression Model

By R. Ragunath, R. Rathipriya

DOI: https://doi.org/10.5815/ijitcs.2025.03.07, Pub. Date: 8 Jun. 2025

This paper introduces a novel approach for forecasting the price trends of agricultural commodities to address the issue of price volatility faced by both farmers and consumers. Accurate forecasting of food prices is particularly crucial in emerging nations such as India, where food security is a top priority. To achieve this goal, the paper presents an ensemble learning-based approach for predicting the agricultural commodity price (ACP) trend. Using rainfall and wholesale price index (WPI) datasets, the study compares the performance of various individual and ensemble regression models. The findings of this work demonstrate that the novel competitive ensemble regression (CER) approach outperforms traditional individual regression models in accurately predicting price trends. The approach's high potential and more precise predictions can benefit farmers and dealers, and also make the model suitable for the financial industry.
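
The paper's competitive ensemble regression (CER) method is not detailed in this abstract, so the sketch below shows one plausible reading under stated assumptions: several regressors compete on a time-ordered validation split and the best performer issues the forecast. The file and column names are hypothetical.

```python
# Illustrative sketch of a competitive ensemble: candidate regressors compete on a
# validation split and the winner forecasts the commodity price trend.
# This is one plausible interpretation, not the paper's exact CER algorithm.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

df = pd.read_csv("acp_data.csv")             # hypothetical: rainfall + WPI features
X, y = df[["rainfall", "wpi_lag1", "wpi_lag2"]], df["wpi"]
split = int(0.8 * len(df))                   # time-ordered split, no shuffling
X_tr, X_val, y_tr, y_val = X[:split], X[split:], y[:split], y[split:]

candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores[name] = mean_squared_error(y_val, model.predict(X_val))

winner = min(scores, key=scores.get)
print("Winning model:", winner, "validation MSE:", round(scores[winner], 3))
```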

Design and Implementation of a Web-based Document Management System

By Samuel M. Alade

DOI: https://doi.org/10.5815/ijitcs.2023.02.04, Pub. Date: 8 Apr. 2023

One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. The field has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds. It is impossible to overstate the importance of document management systems as a necessity in the workplace environment of an organization. Interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures were all used to collect data. The development approach followed a software development methodology called the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The results of the system evaluation showed a successful outcome. After using the system that was created, respondents' satisfaction with it was 96.60%. This shows that the document system was regarded as adequate and good enough to meet the specified requirements when users (secretaries and departmental personnel) used it. Results showed that the system developed yielded an accuracy of 95% and a usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems undoubtedly assist in holding and managing a substantial portion of an organization's knowledge assets, which include documents and other associated items.

Cardiotocography Data Analysis to Predict Fetal Health Risks with Tree-Based Ensemble Learning

By Pankaj Bhowmik, Pulak Chandra Bhowmik, U. A. Md. Ehsan Ali, Md. Sohrawordi

DOI: https://doi.org/10.5815/ijitcs.2021.05.03, Pub. Date: 8 Oct. 2021

A sizeable number of women face difficulties during pregnancy, which can eventually lead the fetus towards serious health problems. However, early detection of these risks can save the invaluable lives of both infants and mothers. Cardiotocography (CTG) data, which provides sophisticated information by monitoring the fetal heart rate signal, is used to predict potential risks to fetal wellbeing and to make clinical conclusions. This paper proposes to analyze the antepartum CTG data (available on the UCI Machine Learning Repository) and to develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL considers the Stacking approach, and a concise overview of this approach is discussed and developed accordingly. The study also endeavors to apply distinct machine learning algorithmic techniques to the CTG dataset and determine their performance. The Stacking EL technique in this paper involves four tree-based machine learning algorithms, namely the Random Forest classifier, Decision Tree classifier, Extra Trees classifier, and Deep Forest classifier, as base learners. The CTG dataset contains 21 features, but only the 10 most important features are selected from the dataset with the Chi-square method for this experiment, and the features are then normalized with Min-Max scaling. Following that, Grid Search is applied for tuning the hyperparameters of the base algorithms. Subsequently, 10-fold cross-validation is performed to select the meta learner of the EL classifier model. A comparative model assessment is made between the individual base learning algorithms and the EL classifier model, and the findings depict the EL classifier's superiority in fetal health risk prediction, securing an accuracy of about 96.05%. This study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve models' accuracy and reduce the error rate.
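
The preprocessing and stacking steps listed in the abstract map naturally onto scikit-learn, as in the hedged sketch below. The Deep Forest base learner is omitted because it is not part of scikit-learn, the meta-learner shown is an assumption, and the file and column names are hypothetical.

```python
# Illustrative sketch of the described pipeline: Chi-square feature selection,
# Min-Max scaling, and a stacking ensemble of tree-based learners.
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("ctg.csv")                    # hypothetical export of the UCI CTG data
X, y = df.drop(columns=["fetal_health"]), df["fetal_health"]

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("et", ExtraTreesClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner (assumed)
    cv=10,
)

pipeline = Pipeline([
    ("scale", MinMaxScaler()),                 # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=10)),       # keep the 10 most informative features
    ("model", stack),
])

print("10-fold CV accuracy:", cross_val_score(pipeline, X, y, cv=10).mean())
```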

Advanced Applications of Neural Networks and Artificial Intelligence: A Review

By Koushal Kumar, Gour Sundar Mitra Thakur

DOI: https://doi.org/10.5815/ijitcs.2012.06.08, Pub. Date: 8 Jun. 2012

Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the ability to interpret data. Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common applications where neural networks are used for problem solving are pattern recognition, data analysis, control, and clustering. Artificial Neural Networks have abundant features, including high processing speeds and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of Neural Networks and Artificial Intelligence, provide an overview of where AI and ANNs are used, and discuss the critical role they play in different areas.

Accident Response Time Enhancement Using Drones: A Case Study in Najm for Insurance Services

By Salma M. Elhag, Ghadi H. Shaheen, Fatmah H. Alahmadi

DOI: https://doi.org/10.5815/ijitcs.2023.06.01, Pub. Date: 8 Dec. 2023

One of the main causes of mortality among people is traffic accidents. Traffic accidents have risen to rank third among the expected causes of death worldwide in 2020. In Saudi Arabia, there are more than 460,000 car accidents every year. The number of car accidents in Saudi Arabia is rising, especially during busy periods such as Ramadan and the Hajj season. The Saudi government is making the required efforts to lower the nation's car accident rate. This paper suggests a business process improvement for car accident reports handled by Najm in accordance with Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help to expedite the process and minimize turnaround time. In addition, drones provide quick accident response and record scenes with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant impact on performance of about 40% in turnaround time. Therefore, using drones can enhance the process of accident response with Najm in Saudi Arabia.

PDF Marksheet Generator

By Srushti Shimpi, Sanket Mandare, Tyagraj Sonawane, Aman Trivedi, K. T. V. Reddy

DOI: https://doi.org/10.5815/ijitcs.2014.11.05, Pub. Date: 8 Oct. 2014

The Marksheet Generator is a flexible system for generating students' progress mark sheets. It is mainly based on database technology and the credit-based grading system (CBGS). The system is targeted at small enterprises, schools, colleges, and universities. It can produce sophisticated, ready-to-use mark sheets that can be created and made ready to print. The development of a mark sheet and gadget sheet focuses on describing tables with columns/rows and sub-columns/sub-rows, rules of data selection and summarizing for a report, a particular table or column/row, and formatting the report in the destination document. The adjustable data interface targets popular data sources (SQL Server) and report destinations (PDF files). The mark sheet generation system can be used in universities to automate the distribution of digitally verifiable mark sheets of students. The system accesses students' exam information from the university database and generates the gadget sheet, which keeps track of student information in a properly listed manner. The project aims at developing a mark sheet generation system which can be used in universities to automate the distribution of digitally verifiable student result mark sheets. The system accesses students' results information from the institute's student database and generates the mark sheets in Portable Document Format, which is tamper-proof and provides the authenticity of the document. The authenticity of the document can also be verified easily.
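
As an illustration of the PDF-generation step, the sketch below renders a simple mark sheet with ReportLab. The student record is hard-coded and the layout is invented; a real system would query the institute's results database and add a verification mechanism.

```python
# Illustrative sketch: render a simple, printable mark sheet to PDF with ReportLab.
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

student = {"name": "A. Student", "roll_no": "2024-001"}        # hypothetical record
marks = [("Mathematics", 85, "A"), ("Physics", 78, "B+"), ("Programming", 91, "A+")]

c = canvas.Canvas("marksheet_2024-001.pdf", pagesize=A4)
width, height = A4

c.setFont("Helvetica-Bold", 14)
c.drawString(50, height - 50, "Progress Mark Sheet")
c.setFont("Helvetica", 11)
c.drawString(50, height - 80, f"Name: {student['name']}    Roll No: {student['roll_no']}")

y = height - 120
for subject, score, grade in marks:              # one row per subject
    c.drawString(50, y, subject)
    c.drawRightString(300, y, str(score))
    c.drawString(330, y, grade)
    y -= 20

c.save()   # writes the PDF; tamper-proofing (e.g. hashing or signing) would be added separately
```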

Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation

By Isaac Kofi Nti, Owusu Nyarko-Boateng, Justice Aning

DOI: https://doi.org/10.5815/ijitcs.2021.06.05, Pub. Date: 8 Dec. 2021

The numerical value of k in a k-fold cross-validation training technique of machine learning predictive models is an essential element that impacts the model's performance. A right choice of k results in better accuracy, while a poorly chosen value for k might affect the model's performance. In the literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor very high variance. However, there is no formal rule. To the best of our knowledge, few experimental studies have attempted to investigate the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15 and 20) on the validation performance of four well-known machine learning algorithms (Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT) and K-Nearest Neighbours (KNN)). It was observed that the value of k and model validation performance differ from one machine learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area-under-the-curve measure, with less computational complexity than k = 10, across most of the machine learning algorithms. We discuss the study outcomes in detail and outline some guidelines for beginners in the machine learning field in selecting the best k value and machine learning algorithm for a given task.
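
The experiment is easy to reproduce in outline with scikit-learn, as in the sketch below, which loops over the same k values for the four algorithms named in the paper. A built-in toy dataset stands in for the study's data.

```python
# Illustrative sketch: measure how the choice of k affects cross-validation accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)     # stand-in dataset, not the study's data

models = {
    "GBM": GradientBoostingClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=5000),
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}

for k in (3, 5, 7, 10, 15, 20):
    scores = {name: cross_val_score(m, X, y, cv=k).mean() for name, m in models.items()}
    print(k, {name: round(s, 3) for name, s in scores.items()})
```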

A Systematic Review of Natural Language Processing in Healthcare

By Olaronke G. Iroju, Janet O. Olaleke

DOI: https://doi.org/10.5815/ijitcs.2015.08.07, Pub. Date: 8 Jul. 2015

The healthcare system is a knowledge-driven industry which consists of vast and growing volumes of narrative information obtained from discharge summaries/reports, physicians' case notes, and pathologists' as well as radiologists' reports. This information is usually stored in unstructured and non-standardized formats in electronic healthcare systems, which makes it difficult for the systems to understand the information content of the narrative text. Thus, access to valuable and meaningful healthcare information for decision making is a challenge. Nevertheless, Natural Language Processing (NLP) techniques have been used to structure narrative information in healthcare. NLP techniques have the capability to capture unstructured healthcare information, analyze its grammatical structure, determine the meaning of the information, and translate it so that it can be easily understood by electronic healthcare systems. Consequently, NLP techniques reduce cost as well as improve the quality of healthcare. It is against this background that this paper reviews the NLP techniques used in healthcare, their applications, as well as their limitations.
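
As a small illustration of what NLP can extract from narrative text, the sketch below runs spaCy's general-purpose English model over an invented discharge note. Real healthcare NLP would rely on clinical domain models and de-identification; the note and model choice here are assumptions.

```python
# Illustrative sketch: extracting structure from a narrative clinical note with spaCy.
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")             # general-purpose model, not a clinical one

note = ("Patient discharged on 12 March 2024 after treatment at General Hospital. "
        "Prescribed metformin; follow-up with Dr. Smith in two weeks.")

doc = nlp(note)
for ent in doc.ents:
    print(ent.text, "->", ent.label_)          # dates, organizations, persons, etc.

# Sentence segmentation and part-of-speech tags are also available:
for sent in doc.sents:
    print([(tok.text, tok.pos_) for tok in sent][:5])
```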

Multi-Factor Authentication for Improved Enterprise Resource Planning Systems Security

By Carolyne Kimani, James I. Obuhuma, Emily Roche

DOI: https://doi.org/10.5815/ijitcs.2023.03.04, Pub. Date: 8 Jun. 2023

Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated management of processes and transactions in real time. These systems contain large amounts of information and hence require secure authentication. Authentication in this case refers to the process of verifying an entity's or device's identity, to allow it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. A password-based authentication approach has weaknesses that can be easily compromised. Cyber-attacks to access these ERP systems have become common in institutions of higher learning and cannot be underestimated as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks which targeted authentication vulnerabilities, resulting in damage to the institutions' reputations and credibility. Thus, this research aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at proposing a solution to improve ERP system authentication. The study aimed at developing and validating a multi-factor authentication prototype to improve ERP systems' security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art technology that is being adopted to strengthen systems' authentication security. This research used an exploratory sequential design that involved a survey of chartered Kenyan universities, where questionnaires were used to collect data that was later analyzed using descriptive and inferential statistics. Stratified, random, and purposive sampling techniques were used to establish the sample size and the target group. The dependent variable for the study was limited to security rating with respect to realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established vulnerabilities, information security policies, and user training as having the highest impact on system security. These three variables hence acted as the basis for the proposed multi-factor authentication framework for improved ERP systems security.
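
As an illustration of the multi-factor idea, the sketch below combines a salted password check (something the user knows) with a time-based one-time password (something the user has) using the pyotp library. It is a generic example, not the framework proposed in the paper.

```python
# Illustrative sketch of adding a second authentication factor (TOTP) on top of a password check.
# Requires: pip install pyotp
import hashlib, hmac, os
import pyotp

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Enrollment: store a salted password hash and a per-user TOTP secret.
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)
totp_secret = pyotp.random_base32()            # shown to the user as a QR code in practice

def login(password, otp_code):
    """Both factors must pass: something you know + something you have."""
    password_ok = hmac.compare_digest(hash_password(password, salt), stored_hash)
    otp_ok = pyotp.TOTP(totp_secret).verify(otp_code, valid_window=1)
    return password_ok and otp_ok

# current_code = pyotp.TOTP(totp_secret).now()   # what the authenticator app displays
# print(login("correct horse battery staple", current_code))
```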

Machine Learning based Wildfire Area Estimation Leveraging Weather Forecast Data

By Saket Sultania, Rohit Sonawane, Prashasti Kanikar

DOI: https://doi.org/10.5815/ijitcs.2025.01.01, Pub. Date: 8 Feb. 2025

Wildfires are increasingly destructive natural disasters, annually consuming millions of acres of forests and vegetation globally. The complex interactions among fuels, topography, and meteorological factors, including temperature, precipitation, humidity, and wind, govern wildfire ignition and spread. This research presents a framework that integrates satellite remote sensing and numerical weather prediction model data to refine estimations of final wildfire sizes. A key strength of our approach is the use of comprehensive geospatial datasets from the IBM PAIRS platform, which provides a robust foundation for our predictions. We implement machine learning techniques through the AutoGluon automated machine learning toolkit to determine the optimal model for burned area prediction. AutoGluon automates the process of feature engineering, model selection, and hyperparameter tuning, evaluating a diverse range of algorithms, including neural networks, gradient boosting, and ensemble methods, to identify the most effective predictor for wildfire area estimation. The system features an intuitive interface developed in Gradio, which allows the incorporation of key input parameters, such as vegetation indices and weather variables, to customize wildfire projections. Interactive Plotly visualizations categorize the predicted fire severity levels across regions. This study demonstrates the value of synergizing Earth observations from spaceborne instruments and forecast data from numerical models to strengthen real-time wildfire monitoring and postfire impact assessment capabilities for improved disaster management. We optimize an ensemble model by comparing various algorithms to minimize the root mean squared error between the predicted and actual burned areas, achieving improved predictive performance over any individual model. The final metric reveals that our optimized WeightedEnsemble model achieved a root mean squared error (RMSE) of 1.564 km2 on the test data, indicating an average deviation of approximately 1.2 km2 in the predictions.
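
The AutoGluon workflow the abstract refers to can be sketched as below. The file and column names are assumptions; the paper's actual features come from IBM PAIRS geospatial layers and weather forecast data.

```python
# Illustrative sketch of AutoGluon's tabular workflow for burned-area regression.
import pandas as pd
from autogluon.tabular import TabularPredictor

train_df = pd.read_csv("wildfire_train.csv")   # hypothetical file with weather + vegetation features
test_df = pd.read_csv("wildfire_test.csv")

predictor = TabularPredictor(
    label="burned_area_km2",                   # assumed target column
    eval_metric="root_mean_squared_error",
).fit(train_data=train_df, presets="best_quality")

# Compare the trained models (including the WeightedEnsemble) and generate predictions.
print(predictor.leaderboard(test_df))
predictions = predictor.predict(test_df)
```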

Markov Models Applications in Natural Language Processing: A Survey

By Talal Almutiri, Farrukh Nadeem

DOI: https://doi.org/10.5815/ijitcs.2022.02.01, Pub. Date: 8 Apr. 2022

Markov models are one of the widely used techniques in machine learning for processing natural language. Markov Chains and Hidden Markov Models are stochastic techniques employed for modeling dynamic systems in which the future state relies on the current state. The Markov chain, which generates a sequence of words to create a complete sentence, is frequently used in natural language generation. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, which tries to predict hidden tags based on observed words. This paper reviews the use of Markov models in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce dependence on lexicons or annotation tasks in NLP. In this paper, we have focused on Markov models as a stochastic approach to processing NLP. A literature review was conducted to summarize research attempts, focusing on the methods/techniques that use Markov models to process natural language, along with their advantages and disadvantages. Most NLP research studies apply supervised models improved with Markov models to decrease the dependency on annotation tasks. Others employed unsupervised solutions to reduce dependence on a lexicon or labeled datasets.
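
As a small illustration of the first application surveyed, the sketch below builds a first-order Markov chain from a toy corpus and samples a sentence from it; the corpus is invented for illustration.

```python
# Illustrative sketch: a first-order Markov chain for natural language generation.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Estimate the transition table P(next word | current word) from bigram counts.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start="the", max_words=10, seed=0):
    random.seed(seed)
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))  # sample the next state
        if words[-1] == ".":
            break
    return " ".join(words)

print(generate())   # e.g. "the dog sat on the mat ."
```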

Detecting and Preventing Common Web Application Vulnerabilities: A Comprehensive Approach

By Najla Odeh, Sherin Hijazi

DOI: https://doi.org/10.5815/ijitcs.2023.03.03, Pub. Date: 8 Jun. 2023

Web applications are becoming very important in our lives, as many sensitive processes depend on them. Therefore, keeping them safe and invulnerable to malicious attacks is critical. Most studies focus on ways to detect these attacks individually. In this study, we develop a new vulnerability system to detect and prevent vulnerabilities in web applications. It has multiple functions to deal with several recurring vulnerabilities. The proposed system provides detection and prevention of four types of vulnerabilities: SQL injection, cross-site scripting attacks, remote code execution, and fingerprinting of backend technologies. We investigated how each type of vulnerability works, then the process of detecting each type, and finally provided prevention for each type. This achieves three goals: reduced testing costs, increased efficiency, and improved safety. The proposed system has been validated through a practical application on a website, and experimental results demonstrate its effectiveness in detecting and preventing security threats. Our study contributes to the field of security by presenting an innovative approach to addressing security concerns, and our results highlight the importance of implementing advanced detection and prevention methods to protect against potential cyberattacks. The significance and research value of this survey lie in its potential to enhance the security of online systems and reduce the risk of data breaches.
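
The detect-then-prevent idea can be illustrated for two of the vulnerability classes mentioned, SQL injection and cross-site scripting, as in the hedged sketch below. The signature patterns are deliberately simplified and are not the proposed system's rules.

```python
# Illustrative sketch of detection and prevention for SQL injection and XSS.
import html
import re
import sqlite3

SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)
XSS_PATTERN = re.compile(r"<\s*script|onerror\s*=|javascript:", re.IGNORECASE)

def looks_malicious(value):
    """Detection step: flag inputs matching common injection signatures."""
    return bool(SQLI_PATTERN.search(value) or XSS_PATTERN.search(value))

def safe_lookup(conn, username):
    """Prevention step: parameterized queries stop SQL injection regardless of input."""
    cur = conn.execute("SELECT id, bio FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def render_comment(comment):
    """Prevention step: escape user content before placing it in HTML (anti-XSS)."""
    return f"<p>{html.escape(comment)}</p>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, bio TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'hello')")

attack = "alice' OR 1=1 --"
print(looks_malicious(attack))                        # True: flagged by the detector
print(safe_lookup(conn, attack))                      # None: the injection has no effect
print(render_comment("<script>alert(1)</script>"))    # script tags rendered harmless
```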

A Systematic Literature Review of Studies Comparing Process Mining Tools

By Cuma Ali Kesici, Necmettin Ozkan, Sedat Taskesenlioglu, Tugba Gurgen Erdogan

DOI: https://doi.org/10.5815/ijitcs.2022.05.01, Pub. Date: 8 Oct. 2022

Process Mining (PM) and PM tool abilities play a significant role in meeting the needs of organizations in terms of getting benefits from their processes and event data, especially in this digital era. The success of PM initiatives in producing effective and efficient outputs and outcomes that organizations desire is largely dependent on the capabilities of the PM tools. This importance of the tools makes the selection of them for a specific context critical. In the selection process of appropriate tools, a comparison of them can lead organizations to an effective result. In order to meet this need and to give insight to both practitioners and researchers, in our study, we systematically reviewed the literature and elicited the papers that compare PM tools, yielding comprehensive results through a comparison of available PM tools. It specifically delivers tools’ comparison frequency, methods and criteria used to compare them, strengths and weaknesses of the compared tools for the selection of appropriate PM tools, and findings related to the identified papers' trends and demographics. Although some articles conduct a comparison for the PM tools, there is a lack of literature reviews on the studies that compare PM tools in the market. As far as we know, this paper presents the first example of a review in literature in this regard.

Comparative Analysis of Multiple Sequence Alignment Tools

By Eman M. Mohamed, Hamdy M. Mousa, Arabi E. Keshk

DOI: https://doi.org/10.5815/ijitcs.2018.08.04, Pub. Date: 8 Aug. 2018

Perfect alignment between three or more sequences of protein, RNA, or DNA is a very difficult task in bioinformatics. There are many techniques for aligning multiple sequences. Many techniques maximize speed and do not concern themselves with the accuracy of the resulting alignment. Likewise, many techniques maximize accuracy and do not concern themselves with speed. Reducing memory and execution time requirements and increasing the accuracy of multiple sequence alignment on large-scale datasets are the vital goals of any technique. This paper introduces a comparative analysis of the most well-known programs (CLUSTAL-OMEGA, MAFFT, BROBCONS, KALIGN, RETALIGN, and MUSCLE). For testing and evaluating the programs, benchmark protein datasets are used. Execution time and alignment quality are the two important metrics. The obtained results show that no single MSA tool can always achieve the best alignment for all datasets.

Incorporating Preference Changes through Users’ Input in Collaborative Filtering Movie Recommender System

By Abba Almu, Aliyu Ahmad, Abubakar Roko, Mansur Aliyu

DOI: https://doi.org/10.5815/ijitcs.2022.04.05, Pub. Date: 8 Aug. 2022

The usefulness of a collaborative filtering recommender system is affected by its ability to capture users' preference changes on the recommended items during the recommendation process. This makes it easy for the system to satisfy users' interests over time, providing good, quality recommendations. The existing system studied fails to solicit user input on the recommended items, and it is also unable to incorporate users' preference changes over time, which leads to poor-quality recommendations. In this work, an enhanced movie recommender system that recommends movies to users is presented to improve the quality of recommendations. The system solicits users' inputs to create user profiles. It then incorporates a set of new features (such as age and genre) to be able to predict users' preference changes over time. This enables it to recommend movies to users based on their new preferences. The experimental study conducted on the Netflix and MovieLens datasets demonstrated that, compared to the existing work, the proposed work improved the recommendation results based on the Precision and RMSE values obtained, which in turn returns good recommendations to the users.
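
As a baseline illustration of collaborative filtering, the sketch below predicts a missing rating from a tiny user-item matrix using cosine similarity between users; it is a generic example, not the enhanced profile-based system presented in the paper.

```python
# Illustrative sketch of user-based collaborative filtering with cosine similarity
# on a tiny ratings matrix (rows = users, columns = movies, 0 = unrated).
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine_sim(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    return np.dot(sims, vals) / (np.sum(sims) + 1e-9)

print(round(predict(user=0, item=2), 2))   # predicted rating for user 0 on movie 2
```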
